```python
>>> from genie.conf.base import Device
>>> device = Device("router", os="iosxr")
>>> # Hack to parse outputs without connecting to a device
>>> device.custom.setdefault("abstraction", {})["order"] = ["os", "platform"]
>>> cmd = "show route ipv4 unicast"
>>> output = """
... Tue Oct 29 21:29:10.924 UTC
...
... O    10.13.110.0/24 [110/2] via 10.12.110.1, 5d23h, GigabitEthernet0/0/0/0.110
... """
>>> device.parse(cmd, output=output)
{'vrf': {'default': {'address_family': {'ipv4': {'routes': {'10.13.110.0/24': {
    'route': '10.13.110.0/24',
    'active': True,
    'route_preference': 110,
    'metric': 2,
    'source_protocol': 'ospf',
    'source_protocol_codes': 'O',
    'next_hop': {'next_hop_list': {1: {
        'index': 1,
        'next_hop': '10.12.110.1',
        'outgoing_interface': 'GigabitEthernet0/0/0/0.110',
        'updated': '5d23h'}}}}}}}}}
```
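Since the parsed result is a plain dictionary, extracting values afterwards is ordinary Python. As a small illustration (continuing the session above, using only the keys shown in the structure):

```python
>>> parsed = device.parse(cmd, output=output)
>>> routes = parsed["vrf"]["default"]["address_family"]["ipv4"]["routes"]
>>> for prefix, data in routes.items():
...     for hop in data["next_hop"]["next_hop_list"].values():
...         print(prefix, "->", hop["next_hop"], hop["outgoing_interface"])
...
10.13.110.0/24 -> 10.12.110.1 GigabitEthernet0/0/0/0.110
```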
Unfortunately, although pyATS invokes the ssh command, it cannot leverage my ssh_config file: pyATS resolves hostnames before providing them to ssh. There is also no plan to open source pyATS.
Then, Genie Parser has two problems:
The first problem shows with show ipv4 vrf all interface, whose output is:
```
Loopback10 is Up, ipv4 protocol is Up
  Vrf is default (vrfid 0x60000000)
  Internet protocol processing disabled
Loopback30 is Up, ipv4 protocol is Down [VRF in FWD reference]
  Vrf is ran (vrfid 0x0)
  Internet address is 203.0.113.17/32
  MTU is 1500 (1500 is available to IP)
  Helper address is not set
  Directed broadcast forwarding is disabled
  Outgoing access list is not set
  Inbound common access list is not set, access list is not set
  Proxy ARP is disabled
  ICMP redirects are never sent
  ICMP unreachables are always sent
  ICMP mask replies are never sent
  Table Id is 0x0
```
Loopback30
and parses
the output to this structure:1
"Loopback10": "int_status": "up", "oper_status": "up", "vrf": "ran", "vrf_id": "0x0", "ipv4": "203.0.113.17/32": "ip": "203.0.113.17", "prefix_length": "32" , "mtu": 1500, "mtu_available": 1500, "broadcast_forwarding": "disabled", "proxy_arp": "disabled", "icmp_redirects": "never sent", "icmp_unreachables": "always sent", "icmp_replies": "never sent", "table_id": "0x0"
Do not add further complexity when it can be avoided. We are generally happy with the feature set of i3 and instead focus on fixing bugs and maintaining it for stability. New features will therefore only be considered if the benefit outweighs the additional complexity, and we encourage users to implement features using the IPC whenever possible.

While this is not as powerful as an embedded language, it is enough for many cases. Moreover, as high-level features may be opinionated, delegating them to small, loosely coupled pieces of code keeps them more maintainable. Libraries exist for this purpose in several languages. Users have published many scripts to extend i3: automatic layout and window promotion to mimic the behavior of other tiling window managers, window swallowing to put a new app on top of the terminal launching it, and cycling between windows with Alt+Tab. Instead of maintaining a script for each feature, I have centralized everything into a single Python process,
i3-companion, using asyncio and the i3ipc-python library. Each feature is self-contained in a function. It implements the following components:
- The workspace_exclusive() function monitors new windows and moves them, if needed, to an empty workspace or to one with the same application already running.
- The quake_console() function implements a drop-down console available from any workspace. It can be toggled with Mod+`. This is implemented as a scratchpad window.
- With the workspace back_and_forth command, we can ask i3 to switch to the previous workspace. However, this feature is not restricted to the current output. I prefer to have one keybinding to switch to the workspace on the next output and one keybinding to switch to the previous workspace on the same output. This behavior is implemented in the previous_workspace() function by keeping a per-output history of the focused workspaces.
- A workspace can be targeted by number with workspace number 4 or move container to workspace number 4. The new_workspace() function finds a free number and uses it as the target workspace.
- The output_update() function also takes an extra step to coalesce multiple consecutive events and to check if there is a real change with the low-level library xcffib.

For example, here is how the previous-workspace feature is registered:

```python
@on(CommandEvent("previous-workspace"), I3Event.WORKSPACE_FOCUS)
async def previous_workspace(i3, event):
    """Go to previous workspace on the same output."""
```
The CommandEvent() event class is my way to send a command to the companion, using either i3-msg -t send_tick or binding a key to a nop command. The latter is used to avoid spawning a shell and an i3-msg process just to send a message. The companion listens to binding events and checks whether they are nop commands.
```
bindsym $mod+Tab nop "previous-workspace"
```
A few decorator helpers are also used: @debounce() to coalesce multiple consecutive calls, @static() to define a static variable, and @retry() to retry a function on failure. The whole script is a bit more than 1000 lines. I think this is worth a read, as I am quite happy with the result.
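The real implementations live in the i3-companion script; as a rough idea of the mechanics, here is a minimal sketch of what a coroutine-oriented @debounce() could look like (hypothetical code, not the author's exact implementation):

```python
import asyncio
import functools

def debounce(delay):
    """Coalesce bursts of calls: only run the wrapped coroutine once
    `delay` seconds have elapsed without a newer call superseding it."""
    def decorator(fn):
        pending = None

        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            nonlocal pending
            if pending is not None and not pending.done():
                pending.cancel()  # drop the superseded call

            async def delayed():
                await asyncio.sleep(delay)
                await fn(*args, **kwargs)

            pending = asyncio.ensure_future(delayed())
        return wrapper
    return decorator
```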
Another helper is notify(), to send notifications using DBus. container_info() and workspace_info() use it to display information about the container or the tree for a workspace.
Workspace names are kept up-to-date by the workspace_rename() function, which adds icons for the applications they contain. The icons are from the Font Awesome project. I maintain a mapping between applications and icons. This is a bit cumbersome, but it looks great.
For CPU, memory, brightness, battery, disk, and audio volume, I am relying on the built-in modules. Polybar's wrapper script generates the list of filesystems to monitor, and they only get displayed when available space is low. The battery widget turns red and blinks slowly when running out of power. Check my Polybar configuration for more details.
For Bluetooth, network, and notification statuses, I am using Polybar's ipc module: the next version of Polybar can receive arbitrary text on an IPC socket. The module is defined with a single hook to be executed at the start to restore the latest status.
```ini
[module/network]
type = custom/ipc
hook-0 = cat $XDG_RUNTIME_DIR/i3/network.txt 2> /dev/null
initial = 1
```
It can be updated with polybar-msg action "#network.send.XXXX". In the i3 companion, the @polybar() decorator takes the string returned by a function and pushes the update through the IPC socket.
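As an illustration of the mechanics (a hypothetical sketch, not the companion's actual code; it reuses the $XDG_RUNTIME_DIR/i3/<module>.txt path and the polybar-msg action form shown above):

```python
import asyncio
import functools
import os

def polybar(module):
    """Push the string returned by the wrapped coroutine to the given
    Polybar IPC module, persisting it for the restore hook."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            text = await fn(*args, **kwargs)
            if text is None:
                return
            # Persist the status for the "initial" hook in the config above.
            path = os.path.join(os.environ["XDG_RUNTIME_DIR"], "i3",
                                f"{module}.txt")
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "w") as f:
                f.write(text)
            # Push the new text to the running Polybar instance.
            proc = await asyncio.create_subprocess_exec(
                "polybar-msg", "action", f"#{module}.send.{text}")
            await proc.wait()
            return text
        return wrapper
    return decorator
```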
The i3 companion reacts to DBus signals to update the Bluetooth and
network icons. The @on()
decorator accepts a DBusSignal()
object:
```python
@on(
    StartEvent,
    DBusSignal(
        path="/org/bluez",
        interface="org.freedesktop.DBus.Properties",
        member="PropertiesChanged",
        signature="sa{sv}as",
        onlyif=lambda args: (
            args[0] == "org.bluez.Device1"
            and "Connected" in args[1]
            or args[0] == "org.bluez.Adapter1"
            and "Powered" in args[1]
        ),
    ),
)
@retry(2)
@debounce(0.2)
@polybar("bluetooth")
async def bluetooth_status(i3, event, *args):
    """Update bluetooth status for Polybar."""
```
~/.xsession-errors
file.3
I am using a two-stage setup: i3.service
depends on
xsession.target
to start services before
i3:
```ini
[Unit]
Description=X session
BindsTo=graphical-session.target
Wants=autorandr.service
Wants=dunst.socket
Wants=inputplug.service
Wants=picom.service
Wants=pulseaudio.socket
Wants=policykit-agent.service
Wants=redshift.service
Wants=spotify-clean.timer
Wants=ssh-agent.service
Wants=xiccd.service
Wants=xsettingsd.service
Wants=xss-lock.service
```
Then, i3 starts the second stage, i3-session.target:
```ini
[Unit]
Description=i3 session
BindsTo=graphical-session.target
Wants=wallpaper.service
Wants=wallpaper.timer
Wants=polybar-weather.service
Wants=polybar-weather.timer
Wants=polybar.service
Wants=i3-companion.service
Wants=misc-x.service
```
The screen saver delay is configured with the xset s command. The locker can be invoked immediately with xset s activate.
X11 applications know how to prevent the screen saver from running. I
have also developed a small dimmer application that is executed 20
seconds before the locker to give me a chance to move the mouse if I
am not away.4 Have a look at my configuration
script.
xrandr
.
:0
and :1
. In the first implementation, I did try to
parametrize each service with the associated display, but this is
useless: there is only one DBus user session and many services
rely on it. For example, you cannot run two notification daemons.
firmware-sof-signed
package.3
The BIOS can be updated directly from Linux, thanks to the Linux
Vendor Firmware Service.
I was using the ThinkPad USB-C Dock Gen 2 as a docking station. Everything works out-of-the-box. However, from time to time, I had trouble reliably getting an image on the two screens. I was using a couple of 10-year-old LG monitors with DVI connectors, so I relied on adapters to convert DP and HDMI to DVI. This may have been the source of some of the problems.
release branch. I have now decided to publish the current code of QSoas in the github repository (in the public branch). This way, you can follow and use all the good things that were developed since the last release, and also verify whether any bug you have is still present in the currently developed version!
| | Before | After |
|---|---|---|
| CPU | Intel i5-4670K @ 3.4 GHz | AMD Ryzen 5 5600X @ 3.7 GHz |
| CPU fan | Zalman CNPS9900 | Noctua NH-U12S |
| Motherboard | Asus Z97-PRO Gamer | Asus TUF Gaming B550-PLUS |
| RAM | 2×8 GB + 2×4 GB DDR3 @ 1.6 GHz | 2×16 GB DDR4 @ 3.6 GHz |
| GPU | Asus Radeon PH RX 550 4G M7 | |
| Disks | 500 GB Crucial P2 NVMe, 256 GB Samsung SSD 850, 256 GB Samsung SSD 840 | |
| PSU | be quiet! Pure Power CM L8 @ 530 W | |
| Case | Antec P100 | |
```
$ lscpu -e
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE    MAXMHZ   MINMHZ
  0    0      0    0 0:0:0:0          yes 3800.0000 800.0000
  1    0      0    1 1:1:1:0          yes 3800.0000 800.0000
  2    0      0    2 2:2:2:0          yes 3800.0000 800.0000
  3    0      0    3 3:3:3:0          yes 3800.0000 800.0000
$ CCACHE_DISABLE=1 =time -f ' %E' make -j$(nproc)
[…]
  OBJCOPY arch/x86/boot/vmlinux.bin
  AS      arch/x86/boot/header.o
  LD      arch/x86/boot/setup.elf
  OBJCOPY arch/x86/boot/setup.bin
  BUILD   arch/x86/boot/bzImage
Kernel: arch/x86/boot/bzImage is ready  (#1)
 4:54.32
```
```
$ lscpu -e
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE    MAXMHZ    MINMHZ
  0    0      0    0 0:0:0:0          yes 5210.3511 2200.0000
  1    0      0    1 1:1:1:0          yes 4650.2920 2200.0000
  2    0      0    2 2:2:2:0          yes 5210.3511 2200.0000
  3    0      0    3 3:3:3:0          yes 5073.0459 2200.0000
  4    0      0    4 4:4:4:0          yes 4932.1279 2200.0000
  5    0      0    5 5:5:5:0          yes 4791.2100 2200.0000
  6    0      0    0 0:0:0:0          yes 5210.3511 2200.0000
  7    0      0    1 1:1:1:0          yes 4650.2920 2200.0000
  8    0      0    2 2:2:2:0          yes 5210.3511 2200.0000
  9    0      0    3 3:3:3:0          yes 5073.0459 2200.0000
 10    0      0    4 4:4:4:0          yes 4932.1279 2200.0000
 11    0      0    5 5:5:5:0          yes 4791.2100 2200.0000
$ CCACHE_DISABLE=1 =time -f ' %E' make -j$(nproc)
[…]
  OBJCOPY arch/x86/boot/vmlinux.bin
  AS      arch/x86/boot/header.o
  LD      arch/x86/boot/setup.elf
  OBJCOPY arch/x86/boot/setup.bin
  BUILD   arch/x86/boot/bzImage
Kernel: arch/x86/boot/bzImage is ready  (#1)
 1:40.18
```
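In other words, the same kernel build goes from 4:54 down to 1:40, roughly a 2.9× speedup.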
make defconfig
on commit 15fae3410f1d.
split-on-values. The first solution is the one that came naturally to me, and it is by far the most general and extensible, but the second one is shorter and doesn't require external script files.
```
QSoas> load kcat-vs-ph.dat
QSoas> split-on-values pH x /flags=data
```

After these commands, the stack contains a series of datasets bearing the data flag, each containing a single column of data, as can be seen from the beginning of the output of the show-stack command:

```
QSoas> k
Normal stack:
     F  C  Rows  Segs  Name
#0  (*)  1    43     1  'kcat-vs-ph_subset_22.dat'
#1  (*)  1    44     1  'kcat-vs-ph_subset_21.dat'
#2  (*)  1    43     1  'kcat-vs-ph_subset_20.dat'
...
```

Each of these datasets has a meta-data named
pH
whose value is the original x value from kcat-vs-ph.dat
. Now, the idea is to run a stats
command on the resulting datasets, extracting the average value of x and its standard deviation, together with the value of the meta pH
. The most natural and general way to do this is to use run-for-datasets
, using the following script file (named process-one.cmds
):
```
stats /meta=pH /output=true /stats=x_average,x_stddev
```

So the command looks like:

```
QSoas> run-for-datasets process-one.cmds flagged:data
```

This command produces an output file containing, for each flagged dataset, a line containing
x_average
, x_stddev
, and pH
. Then, it is just a matter of loading the output file and shuffling the columns in the right order to get the data in the form asked. Overall, this looks like this:
```
l kcat-vs-ph.dat
split-on-values pH x /flags=data
output result.dat /overwrite=true
run-for-datasets process-one.cmds flagged:data
l result.dat
apply-formula tmp=y2;y2=y;y=x;x=tmp
dataset-options /yerrors=y2
```

The slight improvement over what is described above is the use of the
output
command to write the output to a dedicated file (here result.dat
), instead of out.dat
and ensuring it is overwritten, so that no data remains from previous runs.
The second solution relies on the fact that the stats command can work with datasets other than the current one, by supplying them to the /buffers= option, so that it is not necessary to use run-for-datasets:

```
l kcat-vs-ph.dat
split-on-values pH x /flags=data
stats /meta=pH /accumulate=* /stats=x_average,x_stddev /buffers=flagged:data
pop
apply-formula tmp=y2;y2=y;y=x;x=tmp
dataset-options /yerrors=y2
```
```sh
find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -q 84 -af '{}' -o '{}'.webp
```
For AVIF, I am using avifenc from libavif:
```sh
find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      avifenc --codec aom --yuv 420 --min 20 --max 25 '{}' '{}'.avif
```
```sh
jpegoptim=$(nix-build --no-out-link \
      -E 'with (import <nixpkgs>{}); jpegoptim.override { libjpeg = mozjpeg; }')
find media/images -type f -name '*.jpg' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      ${jpegoptim}/bin/jpegoptim --max=84 --all-progressive --strip-all
```
```sh
find media/images -type f -name '*.png' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      pngquant --skip-if-larger --strip \
               --quiet --ext .png --force
```
PNG files are also converted with cwebp, in lossless mode:
```sh
find media/images -type f -name '*.png' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -z 8 '{}' -o '{}'.webp
```
pngquant
and lossy compression is only marginally
better than what I get with WebP.
```zsh
for f in media/images/**/*.{webp,avif}; do
  orig=$(stat --format %s ${f%.*})
  new=$(stat --format %s $f)
  (( orig*0.90 > new )) || rm $f
done
```
```zsh
for f in media/images/**/*.avif; do
  [[ -f ${f%.*}.webp ]] || continue
  orig=$(stat --format %s ${f%.*}.webp)
  new=$(stat --format %s $f)
  (( $orig > $new )) || rm $f
done
```
```zsh
printf "    %10s %10s %10s\n" Original WebP AVIF
for format in png jpg; do
  printf "${format:u} %10s %10s %10s\n" \
    $(find media/images -name "*.$format" | wc -l) \
    $(find media/images -name "*.$format.webp" | wc -l) \
    $(find media/images -name "*.$format.avif" | wc -l)
done
```
```
    Original       WebP       AVIF
PNG       64         47          0
JPG       83         40         74
```
There are two ways to serve the right format: using the <picture> element, to let the browser pick the format it supports, or using the Accept HTTP header in the request. For Chrome, it looks like this:

```
Accept: image/avif,image/webp,image/apng,image/*,*/*;q=0.8
```
```nginx
http {
  map $http_accept $webp_suffix {
    default       "";
    "~image/webp" ".webp";
  }
  map $http_accept $avif_suffix {
    default       "";
    "~image/avif" ".avif";
  }
}
server {
  # […]
  location ~ ^/images/.*\.(png|jpe?g)$ {
    add_header Vary Accept;
    try_files $uri$avif_suffix$webp_suffix $uri$avif_suffix $uri$webp_suffix $uri =404;
  }
}
```
Assume the browser requests /images/ont-box-orange@2x.jpg. If it supports WebP but not AVIF, $webp_suffix is set to .webp while $avif_suffix is set to the empty string. The server tries to serve the first existing file in this list:

- /images/ont-box-orange@2x.jpg.webp
- /images/ont-box-orange@2x.jpg
- /images/ont-box-orange@2x.jpg.webp
- /images/ont-box-orange@2x.jpg

If it supports both AVIF and WebP, the list becomes:

- /images/ont-box-orange@2x.jpg.avif.webp (it never exists)
- /images/ont-box-orange@2x.jpg.avif
- /images/ont-box-orange@2x.jpg.webp
- /images/ont-box-orange@2x.jpg
The Vary header ensures an intermediary cache (a proxy or a CDN) checks the Accept header before using a cached response. Internet Explorer has trouble with this header and may not be able to cache the resource properly. There is a workaround, but Internet Explorer's market share is now so small that it is pointless to implement it.
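To verify the negotiation, one can request the same image with different Accept headers and watch the returned Content-Type; a small sketch (the URL is a placeholder):

```python
import urllib.request

# Hypothetical URL; replace with an image served by the configuration above.
URL = "https://example.com/images/ont-box-orange@2x.jpg"

for accept in ("image/avif,image/webp,*/*", "image/webp,*/*", "*/*"):
    req = urllib.request.Request(URL, headers={"Accept": accept})
    with urllib.request.urlopen(req) as resp:
        # The served variant should follow the advertised Accept header.
        print(f"{accept:30} -> {resp.headers.get('Content-Type')}")
```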
Suppose you have pH-value data: the X column is the pH, the Y column the result of the experiment at the given pH (let's say the measure of the catalytic rate of an enzyme). Your task is to take this data and produce a single dataset which contains, for each pH value, the pH, the average of the results at that pH, and the standard deviation. The result should be identical to the following file, and should look like this:
There are several ways to do this, but all of them rely on stats, and the most natural way in QSoas is to take advantage of split-on-values, a very powerful command that is somewhat hard to master, which is the point of this quiz.
The first file is devices.yaml. It contains the device list. The second file is classifier.yaml.
It defines a scope for each device. A scope is a set of keys and
values. It is used in templates and to look up data associated with a
device.
```
$ ./run-jerikan scope to1-p1.sk1.blade-group.net
continent: apac
environment: prod
groups:
- tor
- tor-bgp
- tor-bgp-compute
host: to1-p1.sk1
location: sk1
member: '1'
model: dell-s4048
os: cumulus
pod: '1'
shorthost: to1-p1
```
For to1-p1.sk1.blade-group.net, the following subset of classifier.yaml defines its scope:
```yaml
matchers:
  - '^(([^.]*)\..*)\.blade-group\.net':
      environment: prod
      host: '\1'
      shorthost: '\2'
  - '\.(sk1)\.':
      location: '\1'
      continent: apac
  - '^to([12])-[as]?p(\d+)\.':
      member: '\1'
      pod: '\2'
  - '^to[12]-p\d+\.':
      groups:
        - tor
        - tor-bgp
        - tor-bgp-compute
  - '^to[12]-(p|ap)\d+\.sk1\.':
      os: cumulus
      model: dell-s4048
```
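As a rough illustration of how such matchers can produce a scope (a hypothetical sketch, not Jerikan's actual code; it assumes earlier matchers win on conflicting keys):

```python
import re

def build_scope(fqdn, matchers):
    """Apply each regex to the FQDN; matching entries contribute their
    keys to the scope, expanding backreferences like \\1 and \\2."""
    scope = {}
    for regex, attrs in matchers:
        m = re.search(regex, fqdn)
        if not m:
            continue
        for key, value in attrs.items():
            if key not in scope:
                scope[key] = m.expand(value) if isinstance(value, str) else value
    return scope

matchers = [
    (r'^(([^.]*)\..*)\.blade-group\.net',
     {"environment": "prod", "host": r"\1", "shorthost": r"\2"}),
    (r'\.(sk1)\.', {"location": r"\1", "continent": "apac"}),
]
print(build_scope("to1-p1.sk1.blade-group.net", matchers))
# {'environment': 'prod', 'host': 'to1-p1.sk1', 'shorthost': 'to1-p1',
#  'location': 'sk1', 'continent': 'apac'}
```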
The third file is searchpaths.py. It describes which directories to search for a variable. A Python function provides a list of paths to look up in data/ for a given scope. Here is a simplified version:2
```python
def searchpaths(scope):
    paths = [
        "host/{scope[location]}/{scope[shorthost]}",
        "location/{scope[location]}",
        "os/{scope[os]}-{scope[model]}",
        "os/{scope[os]}",
        "common",
    ]
    for idx in range(len(paths)):
        try:
            paths[idx] = paths[idx].format(scope=scope)
        except KeyError:
            paths[idx] = None
    return [path for path in paths if path]
```
With this definition, a variable for to1-p1.sk1.blade-group.net is looked up in the following paths:
```
$ ./run-jerikan scope to1-p1.sk1.blade-group.net
[…]
Search paths:
  host/sk1/to1-p1
  location/sk1
  os/cumulus-dell-s4048
  os/cumulus
  common
```
Variables are organized in namespaces:

- system for accounts, DNS, and syslog servers,
- topology for ports, interfaces, IP addresses, and subnets,
- bgp for BGP configuration,
- build for templates and validation scripts,
- apps for application variables.

For example, when looking up a key for to1-p1.sk1.blade-group.net in the bgp namespace, the following YAML files are processed: host/sk1/to1-p1/bgp.yaml, location/sk1/bgp.yaml, os/cumulus-dell-s4048/bgp.yaml, os/cumulus/bgp.yaml, and common/bgp.yaml. The search stops at the first match.
The schema.yaml file allows us to override this behavior by asking to merge dictionaries and arrays across all matching files. Here is an excerpt of this file for the system namespace:
```yaml
system:
  users:
    merge: hash
  sampling:
    merge: hash
  ansible-vars:
    merge: hash
  netbox:
    merge: hash
```
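A sketch of the two lookup strategies, under the assumption that files are ordered from most specific to least specific (hypothetical code, not Jerikan's implementation):

```python
def lookup(key, files, merge=False):
    """First-match lookup by default; with merge=True, merge the
    dictionaries from all matching files, most specific value winning."""
    merged = {}
    for data in files:  # ordered from most to least specific
        if key not in data:
            continue
        if not merge:
            return data[key]          # first match wins
        for k, v in data[key].items():
            merged.setdefault(k, v)   # earlier (more specific) wins
    return merged or None

group = {"netbox": {"role": "net_tor_gpu_switch"}}
junos = {"netbox": {"manufacturer": "Juniper", "model": "QFX5110-48S"}}
print(lookup("netbox", [group, junos], merge=True))
# {'role': 'net_tor_gpu_switch', 'manufacturer': 'Juniper',
#  'model': 'QFX5110-48S'}
```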
Values can also use Jinja2 templates, by prefixing them with ~:

```yaml
# In data/os/junos/system.yaml
netbox:
  manufacturer: Juniper
  model: "~{{ model|upper }}"

# In data/groups/tor-bgp-compute/system.yaml
netbox:
  role: net_tor_gpu_switch
```
Looking up netbox in the system namespace for to1-p2.ussfo03.blade-group.net yields the following result:
```
$ ./run-jerikan scope to1-p2.ussfo03.blade-group.net
continent: us
environment: prod
groups:
- tor
- tor-bgp
- tor-bgp-compute
host: to1-p2.ussfo03
location: ussfo03
member: '1'
model: qfx5110-48s
os: junos
pod: '2'
shorthost: to1-p2
[…]
Search paths:
[…]
  groups/tor-bgp-compute
[…]
  os/junos
  common
$ ./run-jerikan lookup to1-p2.ussfo03.blade-group.net system netbox
manufacturer: Juniper
model: QFX5110-48S
role: net_tor_gpu_switch
```
```yaml
# In groups/adm-gateway/topology.yaml
interface-rescue:
  address: "~{{ lookup('topology', 'addresses').rescue }}"
  up:
    - "~ip route add default via {{ lookup('topology', 'addresses').rescue|ipaddr('first_usable') }} table rescue"
    - "~ip rule add from {{ lookup('topology', 'addresses').rescue|ipaddr('address') }} table rescue priority 10"

# In groups/adm-gateway-sk1/topology.yaml
interfaces:
  ens1f0: "~{{ lookup('topology', 'interface-rescue') }}"
```
```
$ ./run-jerikan lookup gateway1.sk1.blade-group.net topology interfaces
[…]
ens1f0:
  address: 121.78.242.10/29
  up:
    - ip route add default via 121.78.242.9 table rescue
    - ip rule add from 121.78.242.10 table rescue priority 10
```
```yaml
peers:
  transit:
    cogent:
      asn: 174
      remote:
        - 38.140.30.233
        - 2001:550:2:B::1F9:1
      specific-import:
        - name: ATT-US
          as-path: ".*7018$"
          lp-delta: 50
  ix-sfmix:
    rs-sfmix:
      monitored: true
      asn: 63055
      remote:
        - 206.197.187.253
        - 206.197.187.254
        - 2001:504:30::ba06:3055:1
        - 2001:504:30::ba06:3055:2
    blizzard:
      asn: 57976
      remote:
        - 206.197.187.42
        - 2001:504:30::ba05:7976:1
      irr: AS-BLIZZARD
```
The list of templates to build for each device is stored in the build namespace:
```
$ ./run-jerikan lookup edge1.ussfo03.blade-group.net build templates
data.yaml: data.j2
config.txt: junos/main.j2
config-base.txt: junos/base.j2
config-irr.txt: junos/irr.j2
$ ./run-jerikan lookup to1-p1.ussfo03.blade-group.net build templates
data.yaml: data.j2
config.txt: cumulus/main.j2
frr.conf: cumulus/frr.j2
interfaces.conf: cumulus/interfaces.j2
ports.conf: cumulus/ports.j2
dhcpd.conf: cumulus/dhcp.j2
default-isc-dhcp: cumulus/default-isc-dhcp.j2
authorized_keys: cumulus/authorized-keys.j2
motd: linux/motd.j2
acl.rules: cumulus/acl.j2
rsyslog.conf: cumulus/rsyslog.conf.j2
```
Templates are rendered with Jinja2, with custom filters like ipaddr available. Here is an excerpt of templates/junos/base.j2 to configure DNS and NTP servers on Juniper devices:
```jinja
system {
  ntp {
{% for ntp in lookup("system", "ntp") %}
    server {{ ntp }};
{% endfor %}
  }
  name-server {
{% for dns in lookup("system", "dns") %}
    {{ dns }};
{% endfor %}
  }
}
```
```jinja
{% for dns in lookup('system', 'dns') %}
domain vrf VRF-MANAGEMENT name-server {{ dns }}
{% endfor %}
!
{% for syslog in lookup('system', 'syslog') %}
logging {{ syslog }} vrf VRF-MANAGEMENT
{% endfor %}
!
```
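Under the hood, exposing such a lookup() helper to templates is plain Jinja2; a self-contained toy sketch (the data store and function below are made up for illustration):

```python
from jinja2 import DictLoader, Environment

# Toy data store and lookup() standing in for Jerikan's real ones.
DATA = {("system", "ntp"): ["192.0.2.1"]}

def lookup(namespace, key, device=None):
    return DATA[(namespace, key)]

env = Environment(loader=DictLoader({
    "base.j2": (
        "{% for ntp in lookup('system', 'ntp') %}"
        "server {{ ntp }};\n"
        "{% endfor %}"
    ),
}))
env.globals["lookup"] = lookup  # make the function visible to templates
print(env.get_template("base.j2").render())
# server 192.0.2.1;
```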
Jerikan exposes three custom functions to the templates:

- devices() returns the list of devices matching a set of conditions on the scope. For example, devices("location==ussfo03", "groups==tor-bgp") returns the list of devices in San Francisco in the tor-bgp group. You can also omit the operator if you want the specified value to be equal to the one in the local scope. For example, devices("location") returns devices in the current location.
- lookup() does a key lookup. It takes the namespace, the key, and, optionally, a device name. If not provided, the current device is assumed.
- scope() returns the scope of the provided device.

These functions can be combined; for example, to configure iBGP sessions to the edge devices of the local location:

```jinja
{% for neighbor in devices("location", "groups==edge") if neighbor != device %}
{%   for address in lookup("topology", "addresses", neighbor).loopback|tolist %}
protocols bgp group IPV{{ address|ipv }}-EDGES-IBGP {
  neighbor {{ address }} {
    description "IPv{{ address|ipv }}: iBGP to {{ neighbor }}";
  }
}
{%   endfor %}
{% endfor %}
```
Values needed by other templates can be recorded with store() used as a filter:
```jinja
interface Loopback0
 description 'Loopback:'
{% for address in lookup('topology', 'addresses').loopback|tolist %}
 ip{{ address|ipv }} address {{ address|store('addresses', 'Loopback0')|ipaddr('cidr') }}
{% endfor %}
!
```
Another template can then retrieve the recorded values with store() used as a function:4

```jinja
{% for device, ip, interface in store('addresses') %}
{%   set interface = interface|replace('/', '-')|replace('.', '-')|replace(':', '-') %}
{%   set name = '{}.{}'.format(interface|lower, device) %}
{{ name }}. IN {{ 'A' if ip|ipv4 else 'AAAA' }} {{ ip|ipaddr('address') }}
{% endfor %}
```
All configuration files are generated with ./run-jerikan build. The --limit argument restricts the devices to generate configuration files for. The build is not done in parallel because a template may depend on the data collected by another template. Currently, it takes 1 minute to compile around 3000 files spanning over 800 devices.
When an error occurs, a detailed traceback is displayed, including the
template name, the line number and the value of all visible variables.
This is a major time-saver compared to Ansible!
```
templates/opengear/config.j2:15: in top-level template code
    config.interfaces.{{ interface }}.netmask {{ adddress|ipaddr("netmask") }}
        continent   = 'us'
        device      = 'con1-ag2.ussfo03.blade-group.net'
        environment = 'prod'
        host        = 'con1-ag2.ussfo03'
        infos       = {'address': '172.30.24.19/21'}
        interface   = 'wan'
        location    = 'ussfo03'
        loop        = <LoopContext 1/2>
        member      = '2'
        model       = 'cm7132-2-dac'
        os          = 'opengear'
        shorthost   = 'con1-ag2'
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

value = JerkianUndefined, query = 'netmask', version = False, alias = 'ipaddr'
[…]
        # Check if value is a list and parse each element
        if isinstance(value, (list, tuple, types.GeneratorType)):
            _ret = [ipaddr(element, str(query), version) for element in value]
            return [item for item in _ret if item]
>       elif not value or value is True:
E       jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'adddress'
```
These helpers are defined in jerikan/jinja.py. Mastering Jinja2 is a good investment. Take time to browse through our templates, as some of them show interesting features.
Validation scripts live in the checks/ directory. Jerikan looks up the key checks in the build namespace to know which checks to run:
```
$ ./run-jerikan lookup edge1.ussfo03.blade-group.net build checks
- description: Juniper configuration file syntax check
  script: checks/junoser
  cache:
    input: config.txt
    output: config-set.txt
- description: check YAML data
  script: checks/data.yaml
  cache: data.yaml
```
In the above example, checks/junoser is executed if there is a change to the generated config.txt file. It also outputs a transformed version of the configuration file, which is easier to understand when using diff. Junoser checks a Junos configuration file using Juniper's XML schema definition for Netconf.5 On error, Jerikan displays:
```
jerikan/build.py:127: RuntimeError
-------------- Captured syntax check with Junoser call --------------
P: checks/junoser edge2.ussfo03.blade-group.net
C: /app/jerikan
O:
E: Invalid syntax:  set system syslog archive size 10m files 10 word-readable
S: 1
```
The pipeline is described in the .gitlab-ci.yml file. When we need to make a change, we create a dedicated branch and a merge request. GitLab compiles the templates using the same environment we use on our laptops and stores them as an artifact. Before approving the merge request, another team member looks at the changes in data and templates, but also at the differences in the generated configuration files:
```
ob1-n1.sk1.blade-group.net ansible_host=172.29.15.12 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
ob2-n1.sk1.blade-group.net ansible_host=172.29.15.13 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
ob1-n1.ussfo03.blade-group.net ansible_host=172.29.15.12 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
none ansible_connection=local

[oob]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net

[os-ios]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net

[model-c2960s]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net

[location-sk1]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net

[location-ussfo03]
ob1-n1.ussfo03.blade-group.net

[in-sync]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
none
```
in-sync is a special group for devices whose configuration should match the golden configuration. Daily and unattended, Ansible should be able to push configurations to this group. The mid-term goal is to cover all devices.
none
is a special device for tasks not related to a specific host.
This includes synchronizing NetBox, IRR objects, and the DNS,
updating the RPKI, and building the geofeed files.
The entry point is the ansible/playbooks/site.yaml file. Here is a shortened version:
```yaml
- hosts: adm-gateway:!done
  strategy: mitogen_linear
  roles:
    - blade.linux
    - blade.adm-gateway
    - done
- hosts: os-linux:!done
  strategy: mitogen_linear
  roles:
    - blade.linux
    - done
- hosts: os-junos:!done
  gather_facts: false
  roles:
    - blade.junos
    - done
- hosts: os-opengear:!done
  gather_facts: false
  roles:
    - blade.opengear
    - done
- hosts: none:!done
  gather_facts: false
  roles:
    - blade.none
    - done
```
Devices running Junos get the blade.junos role. Once a play has been executed, the device is added to the done group and the other plays are skipped.
The playbook can be executed with the configuration files generated by
the GitLab CI using the ./run-ansible-gitlab
command. This is a
wrapper around Docker and the ansible-playbook
command and it
accepts the same arguments. To deploy the configuration on the edge
devices for the SK1 datacenter in check mode, we use:
```
$ ./run-ansible-gitlab playbooks/site.yaml --limit='edge:&location-sk1' --diff --check
[…]
PLAY RECAP *************************************************************
edge1.sk1.blade-group.net  : ok=6  changed=0  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
edge2.sk1.blade-group.net  : ok=5  changed=0  unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
```
Each role is expected to meet a few requirements:

- --check must detect if a change is needed;
- --diff must provide a visualization of the planned changes;
- --check and --diff must not display anything if there is nothing to change.

We mostly avoid Ansible Galaxy collections, with the exception of the cisco.iosxr collection: the quality of Ansible Galaxy collections is quite random, and it is an additional maintenance burden. It seems better to write roles tailored to our needs. The collections we use are in ci/ansible/ansible-galaxy.yaml. We use Mitogen to get a 10× speedup on Ansible executions on Linux hosts.
We also have a few playbooks for operational purposes: upgrading the OS version, isolating an edge router, etc. We were also planning to add operational checks in roles: are all the BGP sessions up? They could be used to validate a deployment and roll back if there is an issue.
Currently, our playbooks are run from our laptops. To keep tabs, we are using ARA. A weekly dry-run on devices in the in-sync group also provides a dashboard of the devices on which we need to run Ansible.
```
$ ansible --version
ansible 2.10.8
[…]
$ cat test.j2
Hello {{ name }}!
$ ansible all -i localhost, \
>    --connection=local \
>    -m template \
>    -a "src=test.j2 dest=test.txt"
localhost | FAILED! => {
    "changed": false,
    "msg": "AnsibleUndefinedVariable: 'name' is undefined"
}
```
jerikan/jinja.py. This is a remnant of the fact that we do not maintain Jerikan as standalone software.
This is why there are both a store() filter and a store() function: with Jinja2, filters and functions live in two different namespaces.
See, for example, ansible/roles/blade.linux/tasks/firewall.yaml and ansible/roles/blade.linux/tasks/interfaces.yaml. They are meant to be called when needed, using import_role.
$_vbe_prompt_compact is set to 1 when we want a compact prompt. We use the following function to define the prompt appearance:
```zsh
_vbe_prompt () {
    local retval=$?

    # When compact, just time + prompt sign
    if (( $_vbe_prompt_compact )); then
        # Current time (with timezone for remote hosts)
        _vbe_prompt_segment cyan default "%D{%H:%M${SSH_TTY+ %Z}}"
        # Hostname for remote hosts
        [[ $SSH_TTY ]] && \
            _vbe_prompt_segment black magenta "%B%M%b"
        # Status of the last command
        if (( $retval )); then
            _vbe_prompt_segment red default ${PRCH[reta]}
        else
            _vbe_prompt_segment green cyan ${PRCH[ok]}
        fi
        # End of prompt
        _vbe_prompt_end
        return
    fi

    # Regular prompt with many information
    # […]
}
setopt prompt_subst
PS1='$(_vbe_prompt) '
```
Update (2021.05) The following part has been rewritten to be more robust. The code is stolen from Powerlevel10k s issue #888. See the comments for more details.
```zsh
_vbe-zle-line-init() {
    [[ $CONTEXT == start ]] || return 0

    # Start regular line editor
    (( $+zle_bracketed_paste )) && print -r -n - $zle_bracketed_paste[1]
    zle .recursive-edit
    local -i ret=$?
    (( $+zle_bracketed_paste )) && print -r -n - $zle_bracketed_paste[2]

    # If we received EOT, we exit the shell
    if [[ $ret == 0 && $KEYS == $'\4' ]]; then
        _vbe_prompt_compact=1
        zle .reset-prompt
        exit
    fi

    # Line edition is over. Shorten the current prompt.
    _vbe_prompt_compact=1
    zle .reset-prompt
    unset _vbe_prompt_compact

    if (( ret )); then
        # Ctrl-C
        zle .send-break
    else
        # Enter
        zle .accept-line
    fi
    return ret
}
zle -N zle-line-init _vbe-zle-line-init
```
```
bind-key -T copy-mode M-w \
  send -X copy-pipe-and-cancel "sed 's/ .* /%/g' | xclip -i -selection clipboard" \;\
  display-message "Selection saved to clipboard!"
```
```
14:21 % ssh eizo.luffy.cx
Linux eizo 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64
Last login: Fri Apr 23 14:20:39 2021 from 2a01:cb00:3f:b02:9db6:efa4:d85:7f9f
14:21 CEST % uname -a
Linux eizo 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux
14:21 CEST %
Connection to eizo.luffy.cx closed.
14:22 % git status
On branch article/zsh-transient
Untracked files:
  (use "git add <file>..." to include in what will be committed)
        ../../media/images/zsh-compact-prompt@2x.jpg

nothing added to commit but untracked files present (use "git add" to track)
```
```
QSoas> cd
QSoas> load 27.oxw
QSoas> load 27-blanc.oxw
QSoas> S 1 0
```

(after the first command, you have to manually select the directory in which you downloaded the data files). The S 1 0 command just subtracts the dataset 1 (the first loaded) from the dataset 0 (the last loaded); see more there. Blanc is the French for blank... Then, we remove a bit of the beginning and the end of the data, corresponding to one half of the steps at \(E_0\), which we don't exploit much here (they are essentially only used to make sure that the irreversible loss is taken care of properly). This is done using strip-if:
```
QSoas> strip-if x<30||x>300
```

Then, we can fit! The fit used is called fit-linear-kinetic-system, which is used to fit kinetic models with only linear reactions (like here) and steps which change the values of the rate constants but do not instantly change the concentrations. The specific command to fit the data is:
```
QSoas> fit-linear-kinetic-system /species=2 /steps=0,1,2,1,0
```

The /species=2 indicates that there are two species (A and I). The /steps=0,1,2,1,0 indicates that there are 5 steps, with three different conditions (0 to 2) in the order 0,1,2,1,0. This fit needs a bit of setup before getting started. The species are numbered 1 and 2, and the conditions (potentials) are indicated by #0, #1 and #2 suffixes.
Regarding the parameters:

- I_1 and I_2 are the currents for species 1 and 2, so something for 1 (active form) and 0 for 2 (inactive form). Moreover, the parameters I_2_#0 (and _#1, _#2) should be fixed and not free (since we don't need to adjust a current for the inactive form).
- k_11 and k_22 correspond to species-specific irreversible loss. It is generally best to leave them fixed to 0.
- k_12 is the formation of 2 (I) from 1 (A), and k_21 is the formation of A from I. Their values will be determined for the three conditions. The default values should work here.
- The k_loss parameters are the rates of irreversible loss that apply indiscriminately to all species (unlike k_11 and k_22). They are adjusted, and their default values should work too.
- alpha_1_0 and alpha_2_0 are the initial concentrations of species 1 and 2, so they should be fixed to 1 and 0.
- xstart_a (and _b, _c, _d and _e) correspond to the starting times for the steps; here, 0, 60, 120, 210 and 270.

You can load the starting-parameters.params parameters to have everything set up the correct way. Then, just hit Fit, and enjoy this moment when QSoas works and you don't have to... The screen should now look like this: Now, it's done! The fit is actually pretty good, and you can read the values of the inactivation and reactivation rate constants from the fit parameters. You can train also on the 21.oxw and 21-blanc.oxw files. Usually, re-loading the best fit parameters from other potentials as starting parameters works really well. Gathering the results of several fits into a real curve of rate constants as a function of potential is left as an exercise for the reader (or maybe a later post), although you may find these series of posts useful in this context!
In a previous post, we have already described how one can set meta-data for data that are already loaded, and how one can make use of them. QSoas is already able to figure out some meta-data in the case of electrochemical data, most notably for files acquired by GPES, ECLab or CHI potentiostats. However, only a small number of manufacturers are supported as of now[1], and there are a number of experimental details that the software is never going to be able to figure out for you, such as the pH, the sample, what you were doing... The new version of QSoas provides a means to permanently store meta-data for experimental data files:
```
QSoas> record-meta pH 7 file.dat
```

This command uses record-meta to permanently store the information pH = 7 for the file file.dat. Any time QSoas loads the file again, either today or in one year, the meta-data will contain the value 7 for the field pH. Behind the scenes, QSoas creates a single small file, file.dat.qsm, in which the meta-data are stored (in the form of a JSON dictionary).
You can set the same meta-data for many files in one go, using wildcards (see load for more information). For instance, to set the pH=7 meta-data for all the .dat files in the current directory, you can use:

```
QSoas> record-meta pH 7 *.dat
```

You can only set one meta-data per call to record-meta, but you can use it as many times as you like. Finally, you can use the /for-which option of load or browse to select only the files which have the meta you need:

```
QSoas> browse /for-which=$meta.pH<=7
```

This command browses the files in the current directory, showing only the ones that have a pH meta-data which is 7 or below.
[1] I'm always ready to implement the parsing of other file formats that could be useful for you. If you need parsing of special files, please contact me, sending the given files and the meta-data you'd expect to find in those.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.
```
curl -fsSL https://example.com/stable/debian.gpg | sudo apt-key add -
```

Do not follow this, for different reasons, including:
```
% curl -fsSL -o buster.gpg https://pkgs.tailscale.com/stable/debian/buster.gpg
% gpg --keyid-format long buster.gpg
gpg: WARNING: no command supplied.  Trying to guess what you mean ...
pub   rsa4096/458CA832957F5868 2020-02-25 [SC]
      2596A99EAAB33821893C0A79458CA832957F5868
uid                           Tailscale Inc. (Package repository signing key) <info@tailscale.com>
sub   rsa4096/B1547A3DDAAF03C6 2020-02-25 [E]
% file buster.gpg
buster.gpg: PGP public key block Public-Key (old)
```

If you have apt version >= 1.4 available (Debian >= stretch/9 and Ubuntu >= bionic/18.04), you can use this file directly as follows:
```
% sudo mv buster.gpg /usr/share/keyrings/tailscale.asc
% cat /etc/apt/sources.list.d/tailscale.list
deb [signed-by=/usr/share/keyrings/tailscale.asc] https://pkgs.tailscale.com/stable/debian buster main
% sudo apt update
[...]
```

And you're done! Iff your apt version really is older than 1.4, you need to convert the ascii-armored GPG file into a GPG key public ring file (AKA binary OpenPGP format), either by just dearmor-ing it (if you don't care about checking ID + fingerprint):
```
% gpg --dearmor < buster.gpg > tailscale.gpg
```

or, if you prefer to go via GPG, you can also use a temporary GPG home directory (if you don't care about going through your personal GPG setup):
```
% mkdir --mode=700 /tmp/gpg-tmpdir
% gpg --homedir /tmp/gpg-tmpdir --import ./buster.gpg
gpg: keybox '/tmp/gpg-tmpdir/pubring.kbx' created
gpg: /tmp/gpg-tmpdir/trustdb.gpg: trustdb created
gpg: key 458CA832957F5868: public key "Tailscale Inc. (Package repository signing key) <info@tailscale.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1
% gpg --homedir /tmp/gpg-tmpdir --output tailscale.gpg --export-options=export-minimal --export 0x458CA832957F5868
% rm -rf /tmp/gpg-tmpdir
```

The resulting GPG key public ring file should look like that:
```
% file tailscale.gpg
tailscale.gpg: PGP/GPG key public ring (v4) created Tue Feb 25 04:51:20 2020 RSA (Encrypt or Sign) 4096 bits MPI=0xc00399b10bc12858...
% gpg tailscale.gpg
gpg: WARNING: no command supplied.  Trying to guess what you mean ...
pub   rsa4096/458CA832957F5868 2020-02-25 [SC]
      2596A99EAAB33821893C0A79458CA832957F5868
uid                           Tailscale Inc. (Package repository signing key) <info@tailscale.com>
sub   rsa4096/B1547A3DDAAF03C6 2020-02-25 [E]
```

Then you can use this GPG file on your system as follows:
```
% sudo mv tailscale.gpg /usr/share/keyrings/tailscale.gpg
% cat /etc/apt/sources.list.d/tailscale.list
deb [signed-by=/usr/share/keyrings/tailscale.gpg] https://pkgs.tailscale.com/stable/debian buster main
% sudo apt update
[...]
```

Such a setup ensures:
Enabling a repository is done by installing extrepo, and then running extrepo enable <reponame>, where <reponame> is the name of the repository.
Note that the list is not exhaustive, but I intend to show that even though we're nowhere near complete, extrepo
is already quite useful in its current state:
- The debian_official, debian_backports, and debian_experimental repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through extrepo, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the deb.debian.org alias for CDN-backed package mirrors.
- The belgium_eid repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write extrepo in the first place.
- elastic: the elasticsearch software.
- dovecot, winehq and bareos contain upstream versions of their respective software. These repositories contain software that is available in Debian, too; but their upstreams package their most recent release independently, and some people might prefer to run those instead.
- The sury, fai, and postgresql repositories, as well as a number of repositories such as openstack_rocky, openstack_train, haproxy-1.5 and haproxy-2.0 (there are more), contain more recent versions of software packaged in Debian already by the same maintainer of that package repository. For the sury repository, that is PHP; for the others, the name should give it away. The difference between these repositories and the ones above is that it is the official Debian maintainer for the same software who maintains the repository, which is not the case for the others.
- The vscodium repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., the codium version of Visual Studio Code is to code as the chromium browser is to chrome: it is a build of the same software, but without the non-free bits that make code not entirely Free Software.
- Browsers can come from extrepo, too. The iridiumbrowser repository contains a Chromium-based browser that focuses on privacy, and there is also a torproject repository.
- There is a kubernetes repository that contains the Kubernetes stack, as well as the google_cloud one containing the Google Cloud SDK.

If you're using extrepo, please note that non-free and contrib repositories are disabled by default. These repositories must first be enabled, which can be accomplished through /etc/extrepo/config.yaml.
Some non-free examples:

- If you want the proprietary build of Microsoft's Visual Studio Code, the vscode repository contains it.
- Microsoft Teams is available through the msteams repository. And, hey, skype.
- Proprietary browsers: opera and google_chrome.
- The docker-ce repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge Requests rectifying that, from someone with more information on the actual licensing situation of Docker CE, would be welcome...
- There is a steam repository.

In QSoas formulas, global variables are the ones whose name starts with a $
sign. Using this, we can keep track of the previous values. For instance, to create a new column with the sum of the y
values, one can use the following approach:
```
QSoas> eval $sum=0
QSoas> apply-formula /extra-columns=1 $sum+=y;y2=$sum
```

The first line initializes the variable to 0 before we start summing, and the code in the second line is run for each dataset row, in order. For the first row, for instance,
$sum
is initially 0 (from the eval
line); after the execution of the code, it is now the first value of y
. After the second row, the second value of y
is added, and so on. The image below shows the resulting y2
when used on:
```
QSoas> generate-dataset -1 1 x
```
```
## time current potential
0 0.1 0.5
1 0.2
2 0.3
3 0.2
4 1.2 0.6
5 1.3
...
```

If you need to have the values everywhere, for instance if you need to split on their values, you could also use a global variable, taking advantage of the fact that missing values are represented by QSoas using "Not A Number" values, which can be detected using the Ruby function nan?:
```
QSoas> apply-formula "if y2.nan?; then y2=$value; else $value=y2;end"
```

Note the need for quotes, because there are spaces in the Ruby code. If the value of
y2 is NaN, that is, it is missing, then it is taken from the global variable $value; else, $value is set to the current value of y2. Hence, the values are propagated down:
```
## time current potential
0 0.1 0.5
1 0.2 0.5
2 0.3 0.5
3 0.2 0.5
4 1.2 0.6
5 1.3 0.6
...
```

Of course, this doesn't work if the first value of y2 is missing.
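The same forward-fill logic, expressed in plain Python for readers who want to see it outside QSoas (a minimal sketch):

```python
import math

def forward_fill(values):
    """Propagate the last seen value down through missing (NaN) entries."""
    last = math.nan
    out = []
    for v in values:
        if math.isnan(v):
            out.append(last)
        else:
            last = v
            out.append(v)
    return out

print(forward_fill([0.5, math.nan, math.nan, 0.6, math.nan]))
# [0.5, 0.5, 0.5, 0.6, 0.6]
```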
The /mode=rename option does only the renaming part, without saving. You can make full use of meta-data (see also a first post here) for renaming. The full power is unlocked using the /expression= option. For instance, for renaming the last 5 datasets (numbers 0 to 4) using a scheme based on the value of their pH meta-data, you can use the following code:
```
QSoas> save-datasets /mode=rename /expression='"dataset-#{$meta.pH}"' 0..4
```

The double quotes are cumbersome but necessary, since the outer quotes (') prevent the inner ones (") from being removed, and the inner quotes are there to indicate to Ruby that we are dealing with text. The bit inside #{...} is interpreted by Ruby as Ruby code; here it is $meta.pH, the value of the "pH" meta-data. Finally, the 0..4 specifies the datasets to work with. So these datasets will change name to become dataset-7 for pH 7, etc.

Expert mode for fitting

Undoubtedly the most important feature in the new version is a complete upgrade of the fit system, which now features an expert mode, turned on by using the /expert=true option with the fit commands. The expert mode features a command prompt that looks like the normal command prompt, in which it is possible, among other things, to run:
- parameter space exploration, which consists in running (automatically) a number of fits with different (random) starting parameters to maximize the chances of finding the best parameters for the fit, avoiding local minima (try Show random to learn new tricks!).

Many other features

For the full list of changes, please see the changelog. Apart from the changes described above, these are my favorites:
- improvements to apply-formula, like automatic code completion;
- permanent meta-data with record-meta;
- the new set-meta command:

```
QSoas> set-meta pH 7.5
```

This command sets the meta-data
pH
to the value 7.5
. Keep in mind that QSoas does not know anything about the meaning of the meta-data[1]. It can keep track of the meta-data you give, and manipulate them, but it will not interpret them for you. You can set several meta-data by repeating calls to set-meta
, and you can display the meta-data attached to a dataset using the command show
. Here is an example:
```
QSoas> generate-buffer 0 10
QSoas> set-meta pH 7.5
QSoas> set-meta sample "My sample"
QSoas> show 0
Dataset generated.dat: 2 cols, 1000 rows, 1 segments, #0
Flags:
Meta-data: pH = 7.5  sample = My sample
```

Note here the use of quotes around
My sample
since there is a space inside the value.
Using meta-data
There are many ways to use meta-data in QSoas. In this post, we will discuss just one: using meta-data in the output file. The output file can collect data from several commands, like peak data, statistics and so on. For instance, each time the command 1
is run, a line with the information about the largest peak of the current dataset is written to the output file. It is possible to automatically add meta-data to those lines by using the /meta=
option of the output
command. Just listing the names of the meta-data will add them to each line of the output file.
As a full example, we'll see how one can take advantage of meta-data to determine the position of the peak of the function \(x^2 \exp (-a\,x)\) depends on \(a\). For that, we first create a script that generates the function for a certain value of \(a\), sets the meta-data a
to the corresponding value, and find the peak. Let's call this file do-one.cmds
(all the script files can be found in the GitHub repository):
```
generate-buffer 0 20 x**2*exp(-x*${1})
set-meta a ${1}
1
```

This script takes a single argument, the value of \(a\), generates the appropriate dataset, sets the meta-data
a
and writes the data about the largest (and only in this case) peak to the output file. Let's now run this script with 1
as an argument:
```
QSoas> @ do-one.cmds 1
```

This command generates a file
out.dat
containing the following data:
```
## buffer what x y index width left_width right_width area
generated.dat max 2.002002002 0.541340590883 100 3.4034034034 1.24124124124 2.16216216216 1.99999908761
```

This gives various information about the peak found: the name of the dataset it was found in, whether it's a maximum or minimum, the x and y positions of the peak, the index in the file, the widths of the peak, and its area. We are interested here mainly in the x position.
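As a sanity check, the peak of \(x^2 \exp (-a\,x)\) can be computed analytically: the derivative \((2x - a x^2)\exp(-a\,x)\) vanishes at \(x = 2/a\), so for \(a = 1\) the peak sits at \(x = 2\), in good agreement with the 2.002 found above (the small difference comes from the finite sampling of the dataset).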
Then, we just run this script for several values of \(a\) using run-for-each
, and in particular the option /range-type=lin
that makes it interpret values like 0.5..5:80
as 80 values evenly spread between 0.5 and 5. The script is called
run-all.cmds
:
```
output peaks.dat /overwrite=true /meta=a
run-for-each do-one.cmds /range-type=lin 0.5..5:80
V all /style=red-to-blue
```

The first line sets up the output to the output file
peaks.dat. The option /meta=a makes sure the meta a is added to each line of the output file, and /overwrite=true makes sure the file is overwritten just before the first data is written to it, in order to avoid accumulating the results of different runs of the script. The last line just displays all the curves with a color gradient. It looks like this:
Running this script (with @ run-all.cmds
) creates a new file peaks.dat
, whose first line looks like this:
```
## buffer what x y index width left_width right_width area a
```

The column
x (the 3rd) contains the position of the peaks, and the column a (the 10th) contains the meta a (this column wasn't present in the output described above, because we had not yet used the output /meta=a command). Therefore, to load the peak position as a function of a, one just has to run:
```
QSoas> load peaks.dat /columns=10,3
```

The result looks like this: Et voilà! To train further, you can:
[1] This is not exactly true. For instance, some commands like unwrap interpret the sr meta-data as a voltammetric scan rate if it is present. But this is the exception.
About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.